Statistical-Computational Tradeoff in Single Index Models
We study statistical-computational tradeoffs in the high-dimensional single index model $Y=f(X^\top\beta^*)+\epsilon$, where $f$ is unknown, $X$ is a Gaussian vector, and $\beta^*$ is $s$-sparse with unit norm. When $\mathrm{Cov}(Y,X^\top\beta^*)\neq 0$, \cite{plan2016generalized} shows that the direction and support of $\beta^*$ can be recovered using a generalized version of the Lasso. In this paper, we investigate the case where this critical assumption fails to hold, which makes the problem considerably harder. Using the statistical query model to characterize the computational cost of an algorithm, we show that when $\mathrm{Cov}(Y,X^\top\beta^*)=0$ and $\mathrm{Cov}(Y,(X^\top\beta^*)^2)>0$, no computationally tractable algorithm can achieve the information-theoretic limit of the minimax risk. This implies that one must pay an extra computational cost for the nonlinearity in the model.
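The regime the abstract describes can be simulated directly: with an even link function such as $f(t)=t^2$, the linear correlation $\mathrm{Cov}(Y, X^\top\beta^*)$ vanishes while the second-moment signal $\mathrm{Cov}(Y,(X^\top\beta^*)^2)$ stays positive. A minimal NumPy sketch under these assumptions (the specific sparse $\beta^*$, noise level, and quadratic link are illustrative choices, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 20000, 50, 3

# An s-sparse, unit-norm direction beta* (illustrative choice)
beta = np.zeros(d)
beta[:s] = 1.0 / np.sqrt(s)

X = rng.standard_normal((n, d))
index = X @ beta                              # the index X^T beta*
Y = index**2 + 0.1 * rng.standard_normal(n)   # even link f(t) = t^2

cov_lin = np.cov(Y, index)[0, 1]      # ~ 0: first-order signal vanishes
cov_quad = np.cov(Y, index**2)[0, 1]  # > 0: quadratic signal remains
print(cov_lin, cov_quad)
```

Because `index` is standard Gaussian, `cov_quad` concentrates near $\mathrm{Var}(Z^2)=2$ for $Z\sim N(0,1)$, while `cov_lin` is of order $n^{-1/2}$; this is exactly the setting where Lasso-type linear correlation methods receive no signal.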
Comparing Model-agnostic Feature Selection Methods through Relative Efficiency
Chenghui Zheng, Garvesh Raskutti
Feature selection and importance estimation in a model-agnostic setting are an ongoing challenge of significant interest. Wrapper methods are commonly used because they are typically model-agnostic, even though they are computationally intensive. In this paper, we focus on feature selection methods related to the Generalized Covariance Measure (GCM) and Leave-One-Covariate-Out (LOCO) estimation, and provide a comparison based on relative efficiency. In particular, we present a theoretical comparison under three model settings: linear models, non-linear additive models, and single index models that mimic a single-layer neural network. We complement this with extensive simulations and real data examples. Our theoretical results, along with empirical findings, demonstrate that GCM-related methods generally outperform LOCO under suitable regularity conditions. Furthermore, we quantify the asymptotic relative efficiency of these approaches. Our simulations and real data analysis include widely used machine learning methods such as neural networks and gradient boosting trees.
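The two approaches compared above have simple plug-in forms: GCM multiplies the residuals of $Y$ on $X_{-j}$ with the residuals of $X_j$ on $X_{-j}$ and normalizes, while LOCO measures the held-out prediction-error increase when feature $j$ is dropped. A minimal NumPy sketch with OLS as the plug-in regression (the helper names `gcm_stat` and `loco_gap`, and the 50/50 sample split, are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def ols_residuals(y, Z):
    """Residuals from least-squares regression of y on Z (with intercept)."""
    Z1 = np.column_stack([np.ones(len(y)), Z])
    coef, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    return y - Z1 @ coef

def gcm_stat(Y, X, j):
    """GCM test statistic for feature j: normalized mean of the
    product of residuals of Y on X_{-j} and of X_j on X_{-j}."""
    Xmj = np.delete(X, j, axis=1)
    R = ols_residuals(Y, Xmj) * ols_residuals(X[:, j], Xmj)
    return np.sqrt(len(Y)) * R.mean() / R.std()

def loco_gap(Y, X, j):
    """LOCO importance: increase in held-out MSE when feature j is
    dropped, using a simple 50/50 train/test split."""
    n = len(Y)
    m = n // 2

    def heldout_mse(Z):
        Z1 = np.column_stack([np.ones(m), Z[:m]])
        coef, *_ = np.linalg.lstsq(Z1, Y[:m], rcond=None)
        Zt = np.column_stack([np.ones(n - m), Z[m:]])
        return np.mean((Y[m:] - Zt @ coef) ** 2)

    return heldout_mse(np.delete(X, j, axis=1)) - heldout_mse(X)

rng = np.random.default_rng(1)
n, d = 5000, 5
X = rng.standard_normal((n, d))
Y = 2.0 * X[:, 0] + rng.standard_normal(n)  # only feature 0 is relevant

print(gcm_stat(Y, X, 0), gcm_stat(Y, X, 1))  # large vs approx N(0,1)
print(loco_gap(Y, X, 0), loco_gap(Y, X, 1))  # positive gap vs near zero
```

In this toy linear setting both methods separate the relevant feature from the irrelevant one; the paper's relative-efficiency analysis concerns how sharply each statistic does so as the model class varies.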